
    Establishing the validity of reading-into-writing test tasks for the UK academic context

    A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy.

    The present study aimed to establish a test development and validation framework for reading-into-writing tests in order to improve the accountability of using the integrated task type to assess test takers' ability in Academic English. The study applied Weir's (2005) socio-cognitive framework to gather evidence on three components of test validity (context validity, cognitive validity and criterion-related validity) for two common types of reading-into-writing test task: an essay task with multiple verbal inputs and an essay task with multiple verbal and non-verbal inputs. Through a literature review and a series of pilot studies, a set of contextual and cognitive parameters was defined at the pilot phase to describe explicitly the features of the target academic writing tasks and the cognitive processes required to complete those tasks successfully.

    A mixed-methods approach was used in the main study to establish the context, cognitive and criterion-related validity of the reading-into-writing test tasks. For context validity, expert judgement and automated textual analysis were applied to examine how closely the contextual features of the test tasks (overall task setting and input text features) corresponded to those of the target academic writing tasks. For cognitive validity, a cognitive process questionnaire was developed to help participants report the processes they employed on the two reading-into-writing test tasks and on two real-life academic tasks; a total of 443 questionnaires from 219 participants were collected. The analysis of cognitive validity comprised three strands: 1) the cognitive processes involved in real-life academic writing, 2) the extent to which these processes are elicited by the reading-into-writing test tasks, and 3) the underlying structure of the processes elicited by the test tasks. A range of descriptive, inferential and factor analyses were performed on the questionnaire data. The participants' scores on the real-life academic tasks and the reading-into-writing test tasks were collected for correlational analyses to investigate the criterion-related validity of the test tasks.

    The findings support the context, cognitive and criterion-related validity of the integrated reading-into-writing task type. In terms of context validity, the two reading-into-writing tasks largely resembled the real-life tasks in overall task setting, input text features and the linguistic complexity of the input texts. Regarding cognitive validity, the results revealed 11 cognitive processes involved in 5 phases of real-life academic writing, as well as the extent to which these processes were elicited by the test tasks. Both test tasks elicited most of these processes from high-achieving and low-achieving participants to a similar extent as the participants employed them on the real-life tasks, whereas medium-achieving participants tended to employ the processes more on the real-life tasks than on the test tasks. The results of exploratory factor analysis showed that both test tasks largely elicited the same underlying cognitive processes as the real-life tasks did.
    Lastly, for criterion-related validity, the correlations between the two reading-into-writing test scores and academic performance reported in this study are stronger than most figures previously reported in the literature. To the best of the researcher's knowledge, this is the first study to validate two types of reading-into-writing test tasks in terms of all three validity components. The results provide empirical evidence that reading-into-writing tests can operationalise, under test conditions, the appropriate contextual features of academic writing tasks and the cognitive processes required in real-life academic writing, and that reading-into-writing test scores show a promising correlation with target academic performance. The results have important implications for university admissions officers and other stakeholders; in particular, they demonstrate that the integrated reading-into-writing task type is a valid option when considering language teaching and testing for academic purposes. The study also puts forward a test framework with explicit contextual and cognitive parameters for language teachers, test developers and future researchers who intend to develop valid reading-into-writing test tasks for assessing academic writing ability and to conduct validity studies on this integrated task type.

    Protocol for a scoping review of L2 learners’ cognitive processes research in language testing

    Preprint

    Some evidence of the development of L2 reading-into-writing skills at three levels

    While the integrated format has been widely incorporated into high-stakes writing assessment, there is relatively little research on the cognitive processing students engage in during integrated reading-into-writing tasks, and research examining how the reading-into-writing construct differs from one level to the next is scarcer still. Using a writing process questionnaire, we examined and compared test takers' cognitive processes on integrated reading-into-writing tasks at three levels. More specifically, the study aims to provide evidence of the predominant reading-into-writing processes appropriate at each level (the CEFR B1, B2 and C1 levels). The findings reveal the core processes that are essential to the reading-into-writing construct at all three levels, as well as a clear progression in the reading-into-writing skills employed by test takers across the three CEFR levels. A multiple regression analysis was used to examine how well the individual processes predict writers' level of reading-into-writing ability. The findings provide empirical evidence concerning the cognitive validity of reading-into-writing tests and have important implications for task design and scoring at each level.

    Paper-based vs computer-based writing assessment: divergent, equivalent or complementary?

    Writing on a computer is now commonplace in most post-secondary educational contexts and workplaces, making research into computer-based writing assessment essential. This special issue of Assessing Writing includes a range of articles focusing on computer-based writing assessments; some have been designed to parallel an existing paper-based assessment, while others have been constructed as computer-based from the outset. The selection of papers addresses various dimensions of the validity of computer-based writing assessment use in different contexts and across levels of L2 learner proficiency. First, three articles deal with the impact of the two delivery modes, paper-based or computer-based, on test takers' processing and performance in large-scale high-stakes writing tests; next, two articles explore the use of online writing assessment in higher education; the final two articles evaluate the use of technologies to provide feedback to support learning.